| Andrew Heiss | A. Jordan Nafa |
|---|---|
| Georgia State University | University of North Texas |
While the past two decades have seen considerable progress in approaches to causal inference for situations where true experimental manipulation is impractical or impossible, as is the case in much of political science, commonly employed approaches have developed largely within the frameworks of classical econometrics and frequentist non-parametrics (Blackwell and Glynn 2018). Unfortunately, these frameworks are limited in their ability to answer many of the questions of interest to scholars of international relations and comparative politics because they rely heavily on the assumption of long-run replication rather than quantifying uncertainty directly (Gill 1999; Gill and Heuberger 2020; Schrodt 2014; Western and Jackman 1994). In this article we develop a Bayesian approach to the estimation of marginal structural models for causal inference with cross-sectional time-series and panel data. We assess the proposed models' performance relative to existing procedures in a simulation study and two empirical examples, demonstrating that our approach recovers the true parameter values well while also lending itself to a more direct and intuitive interpretation. To ensure accessibility, we provide a flexible implementation of the proposed model in the R package brms (Bürkner 2017, 2018).
|
Introduced to political science in a series of articles by Bruce Western and Simon Jackman (Jackman 2000, 2004; Western 1998; Western and Jackman 1994), Bayesian inference provides several advantages over the frequentist paradigm, particularly for observational research in comparative politics and international relations, where the logic of classical inference is often difficult to defend in practice (Gill 1999; Gill and Heuberger 2020; Schrodt 2014; Western and Jackman 1994). Paired with significant advances in computational power, the development of efficient Markov chain Monte Carlo (MCMC) algorithms and user-friendly open-source software packages has made Bayesian inference widely accessible to researchers in the social sciences (Bürkner 2017, 2018; Goodrich et al. 2020). Despite its popularity in the development of measurement models (Claassen 2019, 2020; Clinton, Jackman, and Rivers 2004; Juhl 2018; Marquardt and Pemstein 2018), the application of Bayesian approaches to hypothesis testing remains relatively rare and is generally limited to circumstances where appropriate frequentist models cannot be applied, such as in the presence of complete separation in discrete choice models (Gelman et al. 2008; Rainey 2016). Since some readers may be unfamiliar with the logic and principles of Bayesian methods, this section provides a brief overview of contemporary Bayesian inference and its advantages over the null hypothesis significance testing paradigm that remains dominant in political science.1
Following a series of contentious debates between Ronald Fisher, Jerzy Neyman, Egon Pearson, and other early twentieth-century statisticians (Fisher 1930, 1935; Neyman and Pearson 1933a, 1933b), the null hypothesis significance testing (NHST) paradigm emerged from an attempt to merge two fundamentally incompatible approaches to scientific inference: Neyman and Pearson's null hypothesis test and Fisher's test of significance (Gill 1999). Neyman and Pearson's null hypothesis test relies on the logic of modus tollens or proof by contradiction
\[ \begin{aligned} y_{i} & \sim \mathcal{N}(\mu_{i}, \sigma)^{\tilde{w}_{i}}\\ \mu_{i} & = \alpha + X_{n}\beta_{k}\\ \textit{where}\\ \alpha & \sim \mathcal{Student \, T}(\nu_{\alpha}, \, \mu_{y}, \, \sigma_{y})\\ \beta_{k} & \sim \mathcal{MVN}(0, \, \Sigma_{\beta})\\ \sigma & \sim \mathcal{Student \, T}_{+}(3, \, 0, \, \sigma_{y})\\ \end{aligned} \]
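A weighted likelihood of this form, in which each observation's contribution is raised to the power of its inverse-probability weight, can be specified in brms via its `weights()` addition term. The sketch below is illustrative only: the data frame `panel_df` and the variable names `y`, `x1`, `x2`, and `w_tilde` (the stabilized weights) are assumptions, not objects from the article, and the prior scales stand in for the data-dependent hyperparameters in the equations above.

```r
# Illustrative brms specification of the weighted outcome model above.
# Assumes a data frame `panel_df` with outcome `y`, covariates `x1` and
# `x2`, and stabilized inverse-probability weights `w_tilde` (all names
# are hypothetical placeholders).
library(brms)

fit <- brm(
  # `y | weights(w_tilde)` weights each observation's log-likelihood
  # contribution, mirroring the exponent on the normal density above
  bf(y | weights(w_tilde) ~ x1 + x2),
  data   = panel_df,
  family = gaussian(),
  prior = c(
    prior(student_t(3, 0, 2.5), class = "Intercept"),
    prior(normal(0, 1), class = "b"),
    prior(student_t(3, 0, 2.5), class = "sigma")
  ),
  chains = 4, cores = 4, seed = 1234
)

summary(fit)
```

In practice the numeric prior scales would be replaced by values calibrated to the outcome's scale, as in the Student-t priors centered on \(\mu_{y}\) and scaled by \(\sigma_{y}\) above.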
It is worth noting here that there is relatively little agreement among statisticians regarding the application of Bayesian estimation, and the meaning of the term often varies substantially within and across disciplines. Our definition herein follows that of Gelman and Shalizi (2012) and McElreath (2020), which views priors as assumptions about the universe of plausible effect sizes and emphasizes aggressive model checking, as opposed to the philosophical Bayesian view of priors as entirely subjective beliefs.↩︎